
    Sharing a conceptual model of grid resources and services

    Grid technologies aim at enabling coordinated resource-sharing and problem-solving capabilities over local and wide area networks, spanning locations, organizations, machine architectures and software boundaries. The heterogeneity of the resources involved and the need for interoperability among different grid middlewares require the sharing of a common information model. Abstractions of different flavors of resources and services, and conceptual schemas of domain-specific entities, require a collaborative effort in order to enable coherent cooperation among information services. With this paper, we present the results of our experience in modelling grid resources and services within the Grid Laboratory Uniform Environment (GLUE) effort, a collaboration between US and EU High Energy Physics projects working towards grid interoperability. We present the first implementation-neutral agreement on services, such as batch computing and the storage manager, and on resources, such as the cluster / sub-cluster / host hierarchy and the storage library. Design guidelines and operational results are presented, together with open issues and future evolutions.

    Quality of service for remote control in High Energy Physics experiments: a case study

    The development of new advanced applications and the evolution of networking are two related processes which greatly benefit from two-way exchanges and from progress in both fields. In this study we show how mission-oriented networked applications can be effectively deployed for research purposes when coupled with support for Quality of Service (QoS) in IP networks, one of the most active research topics in network engineering. We focus on two networked applications used to support experiments in the high energy physics field: remote instrumentation control and remote display of analysis data. Their requirements are the availability of a reliable transmission channel, limited one-way delay for timely interactions between servers and clients, and fairness in the allocation of network resources in case of contention. These requirements can be addressed through the support of QoS, i.e. through the differential treatment of packets on the end-to-end data path. Among the technologies and protocols for QoS support in packet networks devised by the research community in recent years, we focus on the Differentiated Services (diffserv) approach, an architecture characterized by high scalability, flexibility and interoperability. We identify the application requirements, quantitatively specify the corresponding service profiles, and define the diffserv network architecture needed to support them in terms of functional blocks (policing, classification, marking and scheduling) and of their placement in the network. Finally, for each block the configuration best suited to remote control support is defined.

    DataTAG Contributing to LCG-0 Pilot Startup

    The DataTAG project has contributed to the creation of the middleware distribution constituting the base of the LCG-0 pilot. This distribution demonstrated the feasibility of building an EDG release based on iVDGL/VDT, integrating the GLUE schema and early components of the EDG middleware.

    SGSI project at CNAF

    The Italian Tier-1 center is mainly focused on LHC and physics experiments in general. Recently we sought to widen our area of activity and established a collaboration with the University of Bologna to set up an area inside our computing center for hosting experiments with strict security and privacy requirements on stored data. The first experiment we are going to host is Harmony, a project within the Big Data for Better Outcomes programme of the Innovative Medicines Initiative (IMI). In order to accept this kind of data we had to make a subset of our computing center compliant with the ISO 27001 standard. In this article we describe the SGSI project (Sistema Gestione Sicurezza Informazioni, Information Security Management System), detailing all the processes we went through to become ISO 27001 compliant, with a particular focus on the separation of the project-dedicated resources from all the others hosted in the center. We also describe the software solutions adopted to allow this project to accept, in the future, any experiment or collaboration in need of this kind of security procedures.

    Prenylated curcumin analogues as multipotent tools to tackle Alzheimer's disease

    Alzheimer's disease is likely caused by co-pathogenic factors including aggregation of Aβ peptides into oligomers and fibrils, neuroinflammation and oxidative stress. To date, no effective treatments are available and, because of the multifactorial nature of the disease, the need emerges to act on several fronts simultaneously. Despite the multiple biological activities ascribed to curcumin as a neuroprotector, its poor bioavailability and toxicity limit its success in clinical outcomes. To tackle Alzheimer's disease on these fronts, the curcumin template was suitably modified and a small set of analogues was obtained. In particular, derivative 1 turned out to be less toxic than curcumin. As evidenced by capillary electrophoresis and transmission electron microscopy studies, 1 proved to inhibit the formation of large toxic Aβ oligomers, by shifting the equilibrium towards smaller non-toxic assemblies, and to limit the formation of insoluble fibrils. These findings were supported by molecular docking and steered molecular dynamics simulations, which confirmed the superior capacity of 1 to bind Aβ structures of different complexity. Remarkably, 1 also showed in vitro anti-inflammatory and anti-oxidant properties. In summary, the curcumin-based analogue 1 emerged as a multipotent compound worth further investigation and exploitation in the multi-target context of Alzheimer's disease.

    High availability using virtualization - 3RC

    High availability has always been one of the main problems for a data center. Until now, high availability was achieved by host-per-host redundancy, a highly expensive method in terms of hardware and human costs. A new approach to the problem is offered by virtualization: using it, it is possible to achieve a redundancy system for all the services running in a data center. This new approach to high availability allows the running virtual machines to be distributed over a small number of servers by exploiting the features of the virtualization layer: starting, stopping and moving virtual machines between physical hosts. The 3RC system is based on a finite state machine, providing the possibility of restarting each virtual machine on any physical host, or reinstalling it from scratch. A complete infrastructure has been developed to install operating system and middleware in a few minutes. To virtualize the main servers of a data center, a new procedure has been developed to migrate physical hosts to virtual ones. The whole SNS-PISA Grid data center is currently running in a virtual environment under the high availability system.

    GlueDomains: Organization and accessibility of network monitoring data in a grid

    The availability of network monitoring data, while valuable for the operation of Grid applications, poses serious scalability problems: in principle, in a Grid composed of n resources, we need to keep a record of n² end-to-end paths. We introduce a scalable approach to network monitoring, which consists of partitioning the Grid into Domains and limiting monitoring activity to the measurement of Domain-to-Domain connectivity. Partitions must be consistent with network performance, since we expect the observed network performance between Domains to be representative of the performance between the Grid Services included in those Domains. We argue that partition design is a critical step: a consequence of an inconsistent partitioning is the production of invalid characteristics. The paper discusses this approach, also exploring its limits. We describe a fully functional prototype which is currently under test in the frame of the DataTAG project.